
    Passive characterization of SopCast usage in residential ISPs

    Abstract—In this paper we present an extensive analysis of traffic generated by SopCast users, collected from the operational networks of three national ISPs in Europe. After more than a year of continuous monitoring, we report on the popularity of SopCast, which is by far the preferred P2P-TV application in the studied networks. We focus on the analysis of (i) application and bandwidth usage at different time scales, (ii) peer lifetime and the arrival and departure processes, and (iii) peer localization in the world. Results provide useful insights into users' behavior, including their attitude towards P2P-TV applications and the consequent load generated on the network, which varies considerably with access technology and geographical location. Our findings are of interest to researchers investigating users' attitude towards P2P-TV services, to those forecasting trends in future Internet usage, and to designers of such applications.
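    The peer-level statistics mentioned above (lifetime, arrival and departure processes) can be derived from per-peer flow summaries. The sketch below is only an illustration, assuming a hypothetical list of (peer_ip, first_seen, last_seen) tuples rather than the authors' traces: it computes per-peer lifetimes and hourly arrival counts.

    ```python
    from collections import Counter
    from datetime import datetime

    # Hypothetical per-peer flow summaries: (peer_ip, first_seen, last_seen).
    # In the study these would be extracted from passive traces of SopCast traffic.
    records = [
        ("10.0.0.1", datetime(2011, 3, 1, 20, 5), datetime(2011, 3, 1, 21, 40)),
        ("10.0.0.2", datetime(2011, 3, 1, 20, 12), datetime(2011, 3, 1, 20, 30)),
        ("10.0.0.3", datetime(2011, 3, 1, 21, 2), datetime(2011, 3, 1, 22, 15)),
    ]

    # Peer lifetime: time between the first and last observation of a peer.
    lifetimes = {ip: (last - first).total_seconds() for ip, first, last in records}

    # Arrival process: number of new peers appearing in each one-hour bin.
    arrivals_per_hour = Counter(first.replace(minute=0, second=0) for _, first, _ in records)

    print(lifetimes)
    print(dict(arrivals_per_hour))
    ```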

    Inferring Network Usage from Passive Measurements in ISP Networks: Bringing Visibility of the Network to Internet Operators

    The Internet has evolved with us over time; nowadays people depend on it for many of the simple activities of their lives. It is not uncommon to use the Internet for voice and video communication, social networking, banking, and shopping. Current trends in Internet applications such as Web 2.0, cloud computing, and the Internet of Things are bound to bring higher traffic volumes and more heterogeneous traffic. In addition, privacy concerns and network security threats have widely promoted the use of encryption in network communications. All of these factors make network management an evolving environment that becomes more difficult every day. This thesis focuses on helping to keep track of some of these changes, observing the Internet from an ISP viewpoint and exploring several aspects of the visibility of a network, giving insights into which contents or services are retrieved by customers and how these contents are delivered to them. In general, this information is inferred by characterizing and analyzing data collected with passive traffic monitoring tools on operational networks. Analysis and characterization of passively collected traffic is challenging: Internet end-users are not controlled in the network traffic they generate, and this traffic might be encrypted or encoded in a way that is unfeasible to decode, creating the need for reverse engineering in order to provide a good picture to the Internet operator. In spite of these challenges, we first present a characterization of the P2P-TV usage of a commercial, proprietary, and closed application that encrypts or encodes its traffic, making it quite difficult to discern what is going on by just observing the data carried by the protocol. We then present DN-Hunter, an application that renders visible a great part of the network traffic even when encryption or encoding is in use. Finally, we present a case study in which DN-Hunter is used to understand Amazon Web Services (AWS), the most prominent cloud provider offering computing, storage, and content delivery platforms. We unveil its infrastructure, the pervasiveness of the content it hosts, and its traffic allocation policies. Findings reveal that most of the content residing on cloud computing and Internet storage infrastructures is served by a single Amazon datacenter located in Virginia, despite it appearing to be the worst performing one for Italian users. This causes traffic to take long and expensive paths in the network. Since AWS offers no automatic migration and load-balancing policies among different locations, content is exposed to outages, as observed in the datasets presented.

    Exploring the cloud from passive measurements: The Amazon AWS case

    This paper presents a characterization of Amazon's Web Services (AWS), the most prominent cloud provider that offers computing, storage, and content delivery platforms. Leveraging passive measurements, we explore the EC2, S3 and CloudFront AWS services to unveil their infrastructure, the pervasiveness of content they host, and their traffic allocation policies. Measurements reveal that most of the content residing on EC2 and S3 is served by one Amazon datacenter, located in Virginia, which appears to be the worst performing one for Italian users. This causes traffic to take long and expensive paths in the network. Since no automatic migration and load-balancing policies are offered by AWS among different locations, content is exposed to the risk of outages. The CloudFront CDN, on the contrary, shows much better performance thanks to an effective cache selection policy that serves 98% of the traffic from the nearest available cache. CloudFront also exhibits dynamic load-balancing policies, in contrast to the static allocation of instances on EC2 and S3. Information presented in this paper will be useful for developers aiming at entrusting AWS to deploy their contents, and for researchers willing to improve cloud design.
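    One common way to attribute passively observed server addresses to Amazon regions, in the spirit of this study, is to match them against the prefix list Amazon publishes per region. The snippet below is a minimal sketch, not the authors' methodology: it assumes a locally saved copy of Amazon's ip-ranges.json and a hypothetical list of server IPs extracted from flow records.

    ```python
    import ipaddress
    import json

    # Amazon's published prefix list (https://ip-ranges.amazonaws.com/ip-ranges.json),
    # assumed here to be saved locally as ip-ranges.json.
    with open("ip-ranges.json") as f:
        prefixes = [(ipaddress.ip_network(p["ip_prefix"]), p["region"], p["service"])
                    for p in json.load(f)["prefixes"]]

    def aws_region(server_ip):
        """Return (region, service) for an IP, or None if it is outside AWS prefixes."""
        addr = ipaddress.ip_address(server_ip)
        for net, region, service in prefixes:
            if addr in net:
                return region, service
        return None

    # Hypothetical server IPs extracted from passive flow records.
    for ip in ["52.95.110.1", "8.8.8.8"]:
        print(ip, aws_region(ip))
    ```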

    A Distributed Architecture for the Monitoring of Clouds and CDNs: Applications to Amazon AWS

    Clouds and CDNs are systems that tend to separate the content being requested by users from the physical servers capable of serving it. From the network point of view, monitoring and optimizing performance for the traffic they generate are challenging tasks, given that the same resource can be located in multiple places, which can, in turn, change at any time. The first step in understanding cloud and CDN systems is thus the engineering of a monitoring platform. In this paper, we propose a novel solution that combines passive and active measurements and whose workflow has been tailored to specifically characterize the traffic generated by cloud and CDN infrastructures. We validate our platform by performing a longitudinal characterization of the very well known cloud and CDN infrastructure provider Amazon Web Services (AWS). By observing the traffic generated by more than 50,000 Internet users of an Italian Internet Service Provider, we explore the EC2, S3, and CloudFront AWS services, unveiling their infrastructure, the pervasiveness of web services they host, and their traffic allocation policies as seen from our vantage points. Most importantly, we observe their evolution over a two-year-long period. The solution provided in this paper can be of interest for the following: 1) developers aiming at building measurement tools for cloud infrastructure providers; 2) developers interested in failure and anomaly detection systems; and 3) third-party service-level agreement certificators who can design systems to independently monitor performance. Finally, we believe that the results about AWS presented in this paper are interesting in their own right.
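    The combination of passive and active measurements described above can be illustrated with a very small sketch: endpoints harvested by a passive stage are handed to an active prober that estimates latency toward each one. The code below is only an illustration under assumed names (a hypothetical candidates list) and uses TCP connect time as a rough RTT proxy; it does not reproduce the platform described in the paper.

    ```python
    import socket
    import time

    # Hypothetical (server IP, port) endpoints harvested by the passive stage.
    candidates = [("52.95.110.1", 443), ("151.101.1.69", 443)]

    def connect_time(ip, port, timeout=2.0):
        """Active stage: TCP connect time in milliseconds, or None on failure."""
        start = time.monotonic()
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return (time.monotonic() - start) * 1000.0
        except OSError:
            return None

    for ip, port in candidates:
        print(ip, port, connect_time(ip, port))
    ```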

    DNS to the rescue: Discerning Content and Services in a Tangled Web

    A careful perusal of the Internet evolution reveals two major trends: the explosion of cloud-based services and of video streaming applications. In both cases, the owner of the content (e.g., CNN, YouTube, or Zynga) and the organization serving it (e.g., Akamai, Limelight, or Amazon EC2) are decoupled, making it harder to understand the association between the content, its owner, and the host where the content resides. This has created a tangled world wide web that is very hard to unwind, impairing ISPs' and network administrators' capabilities to control the traffic flowing on the network. In this paper, we present DN-Hunter, a system that leverages the information provided by DNS traffic to discern the tangle. By parsing DNS queries, DN-Hunter tags traffic flows with the associated domain name. This association has several applications and reveals a large amount of useful information: (i) it provides fine-grained traffic visibility even when the traffic is encrypted (i.e., TLS/SSL flows), thus enabling more effective policy controls; (ii) it identifies flows even before they begin, thus providing superior network management capabilities to administrators; (iii) it understands and tracks (over time) the different CDNs and cloud providers that host content for a particular resource; (iv) it discerns all the services/content hosted by a given CDN or cloud provider in a particular geography and time; and (v) it provides insights into all applications/services running on any given layer-4 port number. We conduct an extensive experimental analysis on real traffic traces, ranging from FTTH to 4G ISPs, and show results that support our hypothesis. Simply put, the information provided by DNS traffic is one of the key components required to unveil the tangled web and to bring the capability of controlling the traffic back to the network carriers.
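    The core idea of labeling flows with the domain name a client resolved just before opening them can be sketched in a few lines. The snippet below is a simplified illustration, not DN-Hunter itself: it assumes DNS answers and flow records are already available as Python tuples, and keeps a per-client map from resolved server IP to the most recent query name.

    ```python
    # Simplified sketch of DNS-based flow tagging (not the actual DN-Hunter code).
    # dns_answers: (client_ip, server_ip_in_answer, queried_name), in time order.
    dns_answers = [
        ("192.168.1.10", "93.184.216.34", "example.com"),
        ("192.168.1.10", "151.101.1.69", "cdn.example.net"),
    ]
    # flows: (client_ip, server_ip) pairs observed after the DNS answers.
    flows = [("192.168.1.10", "151.101.1.69"), ("192.168.1.10", "1.2.3.4")]

    # Per-client map: resolved server IP -> last domain name that resolved to it.
    last_resolution = {}
    for client, server, name in dns_answers:
        last_resolution[(client, server)] = name

    # Tag each flow with the domain name of the matching DNS resolution, if any.
    for client, server in flows:
        label = last_resolution.get((client, server), "unknown")
        print(f"{client} -> {server}: {label}")
    ```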

    Capacitance Measurements for Subcell Characterization in Multijunction Solar Cells.

    In this paper we present an alternative way to analyze the electronic properties of each subcell from the complete device. By illuminating the cell with light sources whose energy is near one of the subcell bandgaps, it is possible to "erase" the presence of that subcell from the C-V curve. The main advantages of this technique are that it is not destructive and that it can be measured on the complete cell, so it can easily be implemented as a diagnostic technique for controlling electronic deviations.
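    As general background rather than the paper's specific subcell method, a measured C-V curve is commonly turned into an apparent doping density through the Mott-Schottky relation N = 2 / (q·eps·A²·|d(1/C²)/dV|). The sketch below uses synthetic capacitance data and illustrative device parameters, purely to show the arithmetic.

    ```python
    import numpy as np

    # Mott-Schottky analysis of a C-V curve (illustrative values, not the paper's data).
    q = 1.602e-19            # elementary charge [C]
    eps = 11.7 * 8.854e-12   # permittivity, assuming a Si-like material [F/m]
    A = 1e-4                 # assumed device area: 1 cm^2 expressed in m^2

    V = np.linspace(-2.0, 0.0, 50)      # reverse-bias sweep [V]
    C = 2e-9 / np.sqrt(0.8 - V)         # synthetic capacitance data [F]

    inv_C2 = 1.0 / C**2
    slope = np.gradient(inv_C2, V)                   # d(1/C^2)/dV
    N = 2.0 / (q * eps * A**2 * np.abs(slope))       # apparent doping [m^-3]

    print(N.mean() / 1e6, "cm^-3 (order of magnitude)")
    ```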

    Automatic parsing of binary-based application protocols using network traffic

    A method for analyzing a binary-based application protocol of a network. The method includes obtaining conversations from the network; extracting the content of a candidate field from a message in each conversation; calculating a randomness measure of the content, representing the level of randomness of the content across all conversations; calculating a correlation measure of the content, representing the level of correlation, across all conversations, between the content and an attribute of the corresponding conversation in which the message containing the candidate field is located; and selecting, based on the randomness measure and the correlation measure and using a pre-determined field selection criterion, the candidate offset from a set of candidate offsets as the offset defined by the protocol.
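    The two measures that rank candidate fields can be illustrated with a small sketch. The definitions below are assumptions for the example, not the patent's exact formulas: randomness is approximated by Shannon entropy of the field content across conversations, and correlation by the fraction of conversations where equal field values imply equal values of a conversation attribute (here, a hypothetical message length).

    ```python
    import math
    from collections import Counter

    # Content of one candidate offset across conversations, plus a per-conversation
    # attribute (illustrative values only).
    field_values = [b"\x01", b"\x02", b"\x01", b"\x03", b"\x02", b"\x01"]
    attribute = [64, 128, 64, 256, 128, 64]

    def randomness(values):
        """Shannon entropy (bits) of the field content across conversations."""
        counts = Counter(values)
        total = len(values)
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def correlation(values, attr):
        """Fraction of conversations where equal field values imply equal attributes."""
        first_seen = {}
        consistent = 0
        for v, a in zip(values, attr):
            first_seen.setdefault(v, a)
            consistent += first_seen[v] == a
        return consistent / len(values)

    print("randomness:", randomness(field_values))
    print("correlation:", correlation(field_values, attribute))
    ```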

    The impact of port governance and infrastructures on maritime containerized trade on the West Coast of Latin America

    Latin American countries have historically had a strong dependence on trade, being mostly exporters of raw materials and importers of manufactured products. This fact lessened the negative impact of the world crisis on their economic growth, mainly because of the high prices of raw materials. This paper focuses on this geographical area (the West Coast of Latin America) between 2008 and 2015, and adds to the literature by assessing the institutional, port-related, and economic factors that influence maritime transport. The analysis makes use of panel data models with fixed and random effects, with the Hausman test applied in order to select a sound specification across all the ports while accounting for the particular characteristics of each country. It is shown that the analysis of maritime transport requires, beyond trade (volume of TEUs) and infrastructure and superstructure variables (number of calls, gantry cranes), that other variables which are sometimes difficult to quantify, such as port governance, also be taken into account.
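    The fixed-effects versus random-effects comparison via the Hausman test can be sketched in a few lines. The example below assumes the linearmodels and scipy packages are installed and uses synthetic data with hypothetical variable names (teus, calls, cranes); it is not the paper's dataset or specification.

    ```python
    import numpy as np
    import pandas as pd
    from linearmodels.panel import PanelOLS, RandomEffects
    from scipy.stats import chi2

    # Hypothetical panel: ports (entities) observed over years, with synthetic data.
    rng = np.random.default_rng(0)
    idx = pd.MultiIndex.from_product([[f"port{i}" for i in range(10)], range(2008, 2016)],
                                     names=["port", "year"])
    df = pd.DataFrame({"calls": rng.normal(100, 10, len(idx)),
                       "cranes": rng.integers(1, 8, len(idx))}, index=idx)
    df["teus"] = 2.0 * df["calls"] + 5.0 * df["cranes"] + rng.normal(0, 5, len(idx))

    exog = df[["calls", "cranes"]]
    fe = PanelOLS(df["teus"], exog, entity_effects=True).fit()   # fixed effects
    re = RandomEffects(df["teus"], exog).fit()                   # random effects

    # Hausman test: a large statistic signals a systematic difference between the
    # two estimators, favoring the fixed-effects specification.
    d = (fe.params - re.params).to_numpy()
    v = (fe.cov - re.cov).to_numpy()
    stat = float(d @ np.linalg.inv(v) @ d)
    print("Hausman statistic:", stat, "p-value:", chi2.sf(stat, len(d)))
    ```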

    Open-Phase Fault Operation of 5-Phase Induction Motor Drives using DTC Techniques

    Direct torque control (DTC) is extensively used in conventional three-phase drives as an alternative to field-oriented control methods. The standard DTC technique was originally designed to regulate two independent variables using hysteresis controllers. Recent works have extended the procedure to five-phase drives in healthy operation, accounting for the additional degrees of freedom. Although one of the main advantages of multiphase machines is the ability to continue operating under faulty conditions, the use of DTC after the appearance of a fault has not yet been covered in the literature. This paper analyses the operation of a five-phase induction motor drive in a faulty situation using a DTC controller. An open-phase fault condition is considered, and simulation results are provided to study the performance of the drive, comparing it with the behavior in the healthy state.
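    The hysteresis-and-lookup-table logic at the heart of standard DTC can be sketched compactly. The snippet below illustrates the classical three-phase switching-table scheme that the five-phase strategy extends; the hysteresis bands, sector convention, and vector numbering are illustrative assumptions, not the paper's design.

    ```python
    import math

    # Minimal sketch of classical switching-table DTC (three-phase case).
    FLUX_BAND = 0.01     # flux hysteresis band [Wb], illustrative
    TORQUE_BAND = 0.1    # torque hysteresis band [Nm], illustrative

    def sector(flux_alpha, flux_beta):
        """Stator-flux sector 1..6 from its alpha-beta components."""
        angle = math.atan2(flux_beta, flux_alpha) % (2 * math.pi)
        return int(angle // (math.pi / 3)) + 1

    def dtc_vector(flux_err, torque_err, sec):
        """Pick an inverter voltage vector (0 = zero vector, 1..6 = active) from the table."""
        d_flux = 1 if flux_err > FLUX_BAND else 0          # 1: raise flux, 0: lower it
        if torque_err > TORQUE_BAND:
            step = 1 if d_flux else 2                       # advance the flux vector
        elif torque_err < -TORQUE_BAND:
            step = -1 if d_flux else -2                     # retard the flux vector
        else:
            return 0                                        # zero vector: hold torque
        return (sec - 1 + step) % 6 + 1                     # active vectors V1..V6

    print(dtc_vector(flux_err=0.02, torque_err=0.5, sec=sector(0.9, 0.1)))
    ```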